ideas[w84,jmc] what has natural language got
1984 Feb 1
This is in further pursuit of the greater power of natural language
and human thought over logical languages. The advantage appears to
be in ambiguity tolerance. Our thoughts and natural language can use
ideas involving concepts that turn out to be ill-defined when closely
examined. It appears that the ambiguity is not merely in the words
we use, e.g. the presence of homonyms, but in the concepts themselves.
We can successfully use certain concepts to solve a problem and only
later discover that we cannot define the concept precisely. Much of
philosophy is concerned with the fact that the concepts are ambiguous
but not much attention has been devoted to the fact that we can use
them successfully anyway. Indeed we can fail to resolve an ambiguity
that has been pointed out, put the problem aside, and continue to use
the concept in the cases where it isn't ambiguous.
The lack of this ability in logical languages and in their
variants used in AI shows up in the kind of inflexibility often called
brittleness. Typically the language is developed with one or at most
a few problems in mind. When a problem of a kind not previously
envisaged arises, almost invariably the axioms assumed turn out to
be false. Moreover, the new problem often turns out not to be
expressible in the language that has been provided even though
its English expression may involve the words that have supposedly
been formalized.
	Moreover, no one has developed a technique for expressing
the facts of even simple English narrative in logic, let alone
those expressed in texts like this one.
	In spite of these grumbles about logic, we plan to try to
develop, within a logical language, a language for expressing what
thought and natural language express. We distinguish between thought and
natural language, because we regard it as unlikely that all thought
is expressible in present natural language. At least there is
a large body of common sense information that is not ordinarily
expressed in natural language. For example, when a person is asked
to express the common sense facts about quantities of liquid and
their transfer among containers and through pipes and other channels,
he has great difficulty in doing so even though he can correctly
predict the outcome of various actions. It seems that common sense
generalities are particularly hard to express.
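	To see how much even a small fragment of such a formalization must
contain, here is a minimal sketch in programming terms rather than in logic;
the containers, quantities and the transfer rule are invented purely for
illustration and are not offered as the right formalization.

  # A sketch of two common sense facts about quantities of liquid:
  # the total is conserved under transfer, and a container cannot
  # hold more than its capacity.

  def transfer(amounts, capacities, source, dest, requested):
      # What actually moves is limited by what the source holds and
      # by the free space in the destination; the excess stays put.
      moved = min(requested, amounts[source], capacities[dest] - amounts[dest])
      amounts[source] -= moved
      amounts[dest] += moved
      return moved

  # Hypothetical containers, purely for illustration.
  capacities = {"pot": 4.0, "cup": 1.0}
  amounts = {"pot": 3.0, "cup": 0.5}
  transfer(amounts, capacities, "pot", "cup", 2.0)   # only 0.5 fits in the cup
  assert abs(sum(amounts.values()) - 3.5) < 1e-9     # nothing appears or vanishes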
How can we proceed? I would like to try for a methodology
somewhat different from present-day linguistic and philosophical
semantics, because this methodology is already being vigorously
pursued, and I'm unlikely to add much. Here are some possibilities:
	1. Translate simple narratives, e.g. Mr. Hug
(MRHUG[S76,JMC]), into logic and find out where the troubles are.
2. Find out the problems of formalizing bits of acceptable
common sense reasoning. It would be interesting to consider examples
of reasoning about physical situations that can be communicated over
the telephone. Thus we avoid having to look at the physical situation
itself. Perhaps this is too hard. After all, a sculptor or an artist
could recreate a plausible image of the scene from a description.
The rest of us can't do that, but presumably have similar abilities
in embryonic form.
**But we've digressed from ambiguity.
What is the simplest example of ambiguity tolerance? Certainly
there is something simpler than the story about attempting to
bribe a public official. What about mother? As mentioned in
some previous file, the word, and presumably the concept, is taken
by a child variously as a proper name, a two-place predicate and
a one-place predicate. In logic we would be tempted to use separate
predicates, but of course we can also introduce an abstract entity
called "mother" and let the others be aspects of it. Whether this
is enough is debatable.
is-a(Mary, mother)
is-r(Mary, Tom, mother)
the(mother, context)
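	What the abstract-entity proposal might amount to can be sketched in
programming terms; the encoding below, the particular facts, and the use of a
context to resolve the proper-name reading are my own illustration, not a
settled formalization.

  # One abstract entity "mother" whose readings (proper name, one-place
  # predicate, two-place predicate) are treated as aspects of one concept.
  # The facts and the encoding are invented for illustration.

  mothers_of = {"Tom": "Mary", "Ann": "Sue"}          # hypothetical facts

  def is_a(x, concept):
      # one-place reading: x is somebody's mother
      assert concept == "mother"
      return x in mothers_of.values()

  def is_r(x, y, concept):
      # two-place reading: x is the mother of y
      assert concept == "mother"
      return mothers_of.get(y) == x

  def the(concept, context):
      # proper-name reading: "Mother", as a child uses the word,
      # denotes the mother salient in the current context
      assert concept == "mother"
      return mothers_of[context["speaker"]]

  assert is_a("Mary", "mother")
  assert is_r("Mary", "Tom", "mother")
  assert the("mother", {"speaker": "Tom"}) == "Mary"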
1984 Feb 1
Child machine
Many people have proposed to build a child-machine. It knows little
or nothing about the world but is capable of learning everything
humans know from its experience. Presumably it can understand
language or can learn to do so. This is a plausible idea, but attempts
to realize it have not been successful, usually not even resulting
in published papers. In spite of the fact that the approach has been
so far unsuccessful, trying again is worthwhile provided one recognizes
the difficulties that have been encountered and has ideas for overcoming
them. Here is my idea of what the difficulties are.
1. Representation. Consider the very first thing the child-machine
is to learn. Whatever is learned, whether it be a fact or the appropriate
response to a class of stimuli, it must be represented somehow in the
memory of the machine. Therefore, the designer must provide for this
representation. Here he faces a dilemma. If he wants some sophisticated
fact to be representable, it seems that he must build in quite a few
concepts. But this is the problem the proposers of the child-machine
want to avoid. The extreme avoiders of this difficulty have a tendency
to build stimulus-response machines. They have two problems. First,
they are usually forced to admit only fixed stimuli and this makes the
machine quite far from a real-world child who never sees exactly the
same stimulus twice and must begin with a complex stimulus classifier.
Second, it seems that little that children learn takes a simple
stimulus-response form. Children mainly learn facts which they subsequently
use more-or-less intelligently, i.e. with the aid of inference involving
other facts.
Well, maybe I only know that one difficulty.
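	To make the contrast in that difficulty concrete, here is a minimal
sketch with invented stimuli and facts: a fixed stimulus-response table next
to a small store of facts used with one step of inference.

  # Style 1: stimulus-response.  Only exact, previously seen stimuli
  # get a response.
  responses = {"hot stove": "withdraw hand"}

  def respond(stimulus):
      return responses.get(stimulus)          # None for anything novel

  # Style 2: stored facts used with a step of inference.
  facts = {("stove", "hot"), ("hot", "dangerous")}

  def dangerous(thing):
      # chain two facts: the thing is hot, and hot things are dangerous
      return (thing, "hot") in facts and ("hot", "dangerous") in facts

  assert respond("hot stove") == "withdraw hand"
  assert respond("glowing stove") is None     # brittle on anything new
  assert dangerous("stove")                   # follows from two stored facts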